
    Possible Bias in Supervised Deep Learning Algorithms for CT Lung Nodule Detection and Classification

    Artificial Intelligence (AI) algorithms for automatic lung nodule detection and classification can assist radiologists in their daily routine of chest CT evaluation. Even though many AI algorithms for these tasks have already been developed, their implementation in the clinical workflow is still largely lacking. Apart from the significant number of false-positive findings, one of the reasons is the bias that these algorithms may contain. In this review, the different types of bias that may exist in chest CT AI nodule detection and classification algorithms are listed and discussed, examples from the literature in which each type of bias occurs are presented, and ways to mitigate these biases are described. Different types of bias can occur in chest CT AI algorithms for lung nodule detection and classification, and mitigating them completely can be very difficult, if not impossible.

    Skip-SCSE Multi-scale Attention and Co-learning Method for Oropharyngeal Tumor Segmentation on Multi-modal PET-CT Images

    One of the primary treatment options for head and neck cancer is (chemo)radiation. Accurate delineation of the tumor contour is of great importance for successful treatment and for the prediction of patient outcomes. With this paper we take part in the HECKTOR 2021 challenge and propose our methods for automatic tumor segmentation on PET and CT images of oropharyngeal cancer patients. To this end, we investigated different deep learning methods that highlight relevant image- and modality-related features in order to refine the contour of the primary tumor. More specifically, we tested a Co-learning method [1] and a 3D Skip Spatial and Channel Squeeze and Excitation Multi-Scale Attention method (Skip-scSE-M) on the challenge dataset. The best results achieved on the test set were a mean Dice Similarity Score of 0.762 and a median 95th percentile Hausdorff Distance of 3.143.
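
    The scSE building block named in the title is a published module (concurrent spatial and channel squeeze-and-excitation, Roy et al.); as a rough illustration of the idea, the PyTorch sketch below recalibrates a 3D feature map along both its channel and spatial axes. The reduction ratio and tensor sizes are illustrative assumptions, not the authors' exact implementation.

        import torch
        import torch.nn as nn

        class SCSE3D(nn.Module):
            """Concurrent spatial and channel squeeze-and-excitation for 3D feature maps."""
            def __init__(self, channels: int, reduction: int = 8):
                super().__init__()
                # Channel excitation: global pooling -> bottleneck -> per-channel gates.
                self.cse = nn.Sequential(
                    nn.AdaptiveAvgPool3d(1),
                    nn.Conv3d(channels, channels // reduction, kernel_size=1),
                    nn.ReLU(inplace=True),
                    nn.Conv3d(channels // reduction, channels, kernel_size=1),
                    nn.Sigmoid(),
                )
                # Spatial excitation: 1x1x1 convolution -> per-voxel gates.
                self.sse = nn.Sequential(nn.Conv3d(channels, 1, kernel_size=1), nn.Sigmoid())

            def forward(self, x: torch.Tensor) -> torch.Tensor:
                # Recalibrate along channels and space, then sum the two paths.
                return x * self.cse(x) + x * self.sse(x)

        # Example: a batch of 3D PET-CT feature maps (batch, channels, depth, height, width).
        feats = torch.randn(2, 32, 16, 64, 64)
        print(SCSE3D(32)(feats).shape)  # torch.Size([2, 32, 16, 64, 64])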

    Preparing CT imaging datasets for deep learning in lung nodule analysis: Insights from four well-known datasets

    Background: Deep learning is an important means to realize the automatic detection, segmentation, and classification of pulmonary nodules in computed tomography (CT) images. An entire CT scan cannot be fed directly into a deep learning model due to image size, format, dimensionality, and other factors. Between acquiring the CT scan and feeding the data into the deep learning model, several steps are required, including data use permission, data access and download, data annotation, and data preprocessing. This paper provides a complete and detailed guide for researchers who want to engage in interdisciplinary research combining lung nodule CT imaging and Artificial Intelligence (AI) engineering. Methods: The data preparation pipeline used four popular large-scale datasets: LIDC-IDRI (Lung Image Database Consortium image collection), LUNA16 (Lung Nodule Analysis 2016), NLST (National Lung Screening Trial), and NELSON (the Dutch-Belgian Randomized Lung Cancer Screening Trial). The dataset preparation is presented in chronological order. Findings: The different data preparation steps before deep learning were identified, including both generic steps and steps dedicated to lung nodule research. For each step, the required process, its necessity, and example code or tools for implementation are provided. Discussion and conclusion: Depending on the specific research question, researchers should be aware of the various preparation steps required and carefully select datasets, data annotation methods, and image preprocessing methods. Moreover, it is vital to acknowledge that each auxiliary tool or code has its specific scope of use and limitations. This paper proposes a standardized data preparation process while clearly demonstrating the principles and sequence of the different steps; a data preparation pipeline can be quickly realized by following the proposed steps and implementing the suggested example code and tools.
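
    To make the preprocessing step concrete, here is a minimal sketch of two operations that recur in lung nodule pipelines: resampling a CT volume to isotropic voxel spacing and normalizing intensities within a lung window in Hounsfield units. This is an illustrative SimpleITK example, not the paper's published code; the spacing, window bounds, and file name are assumptions.

        import SimpleITK as sitk
        import numpy as np

        def resample_isotropic(image: sitk.Image, spacing=(1.0, 1.0, 1.0)) -> sitk.Image:
            """Resample a CT volume to isotropic voxel spacing with linear interpolation."""
            old_size, old_spacing = image.GetSize(), image.GetSpacing()
            new_size = [int(round(sz * sp / nsp))
                        for sz, sp, nsp in zip(old_size, old_spacing, spacing)]
            return sitk.Resample(image, new_size, sitk.Transform(), sitk.sitkLinear,
                                 image.GetOrigin(), spacing, image.GetDirection(),
                                 -1000, image.GetPixelID())  # pad with air (-1000 HU)

        def normalize_lung_window(volume: np.ndarray, lo=-1000.0, hi=400.0) -> np.ndarray:
            """Clip to a lung HU window and rescale to [0, 1]."""
            return (np.clip(volume, lo, hi) - lo) / (hi - lo)

        ct = sitk.ReadImage("scan.mhd")  # e.g. one LUNA16 scan (hypothetical file name)
        arr = sitk.GetArrayFromImage(resample_isotropic(ct)).astype(np.float32)
        arr = normalize_lung_window(arr)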

    Self-supervised Multi-modality Image Feature Extraction for the Progression Free Survival Prediction in Head and Neck Cancer

    Long-term survival of oropharyngeal squamous cell carcinoma (OPSCC) patients is quite poor. Accurate prediction of Progression Free Survival (PFS) before treatment would make it feasible to identify high-risk patients and to intensify or de-intensify treatment for high- or low-risk patients accordingly. In this work, we propose a deep learning based pipeline for PFS prediction consisting of three parts. First, a pyramid autoencoder extracts image features from both CT and PET scans. Second, a forward feature selection method removes redundant features from the extracted image features as well as from the clinical features. Finally, all selected features are fed into a DeepSurv model for survival analysis, which outputs a PFS risk score for each patient. The whole pipeline was trained on 224 OPSCC patients. We achieved average C-indices of 0.7806 and 0.7967 on the independent validation set for tasks 2 and 3, respectively; the C-indices on the test set were 0.6445 and 0.6373. This demonstrates that our proposed approach has potential for PFS prediction and possibly for other survival endpoints.
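
    For context on the final stage: a DeepSurv-style network outputs one risk score per patient and is trained with the negative Cox partial log-likelihood. The sketch below is a minimal PyTorch version of that objective (ignoring tied event times and batching subtleties); it illustrates the loss, not the authors' exact training code.

        import torch

        def cox_ph_loss(risk: torch.Tensor, time: torch.Tensor, event: torch.Tensor) -> torch.Tensor:
            """Negative Cox partial log-likelihood.
            risk:  (N,) predicted log-risk scores
            time:  (N,) follow-up times
            event: (N,) 1 if progression observed, 0 if censored
            """
            order = torch.argsort(time, descending=True)  # sort so each risk set is a prefix
            risk, event = risk[order], event[order]
            log_cumsum = torch.logcumsumexp(risk, dim=0)  # log of summed exp(risk) over the risk set
            # Only uncensored patients contribute terms to the partial likelihood.
            return -((risk - log_cumsum) * event).sum() / event.sum().clamp(min=1)

        # Toy batch: three patients, two observed progression events.
        risk = torch.tensor([0.2, 1.1, -0.5], requires_grad=True)
        time = torch.tensor([5.0, 2.0, 8.0])
        event = torch.tensor([1.0, 1.0, 0.0])
        print(cox_ph_loss(risk, time, event))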

    Facial Imaging and Diagnosis System for Neurological Disorders

    Bell’s palsy is a peripheral nerve disease of unknown origin that affects the facial nerve and causes unilateral weakness or paralysis of the facial muscles. It has a worldwide incidence rate of approximately 0.01-0.04 %. Prompt treatment with anti-inflammatory and antiviral drugs can stimulate clinical recovery, but despite treatment, recovery is incomplete in two out of ten patients, and severe, persistent facial paralysis occurs in 12 % of patients. In some cases, early and accurate diagnosis is a difficult task even for experienced clinicians. Currently, the only way to differentiate between facial palsy types is clinical observation. This neurological examination, however, is subjective and lacks accuracy, which may result in delayed diagnosis and treatment and, in turn, poor recovery. Current diagnostics can be improved by developing a method that is more sensitive in detecting reduced facial movements and can better discriminate between palsy types. The goal of this study is to create a new technological system based on quantification of facial movements and artificial intelligence, which would have considerable relevance both clinically and in research. The system is intended to guide diagnosis in primary as well as secondary health care, distinguishing Bell’s (peripheral) palsy, a relatively benign cause of facial paralysis, from stroke (central palsy), a potentially lethal disease. To reach this ambitious goal, the system needs to analyze visual data in two sequential steps, answering the following questions: Step 1: normal face or abnormal face? Step 2: central facial palsy or peripheral facial palsy? Our system uses existing facial recognition software to identify key facial features. Using these features, quantified measures are defined that help distinguish a normal from an abnormal face and a central from a peripheral facial palsy. These measures were selected to achieve the best accuracy in each particular step, given the restrictions that apply in each case. The pictures used include healthy people with facial symmetry, patients with Bell’s palsy, and patients who suffered a stroke, the latter two groups showing facial asymmetry. In the end, by manually selecting a few thresholds for our measures, and given that the landmarks are manually detected, an accuracy of 99.51 % is achieved for step 1 (99.65 % F1 score) and 90.91 % for step 2 (93.26 % F1 score).
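
    The abstract does not spell out the quantified measures, but to give a flavor of what a landmark-based asymmetry measure could look like, the sketch below computes the vertical offset between the two mouth corners, measured perpendicular to the interocular axis and normalized by interocular distance. The landmark choice and the idea of thresholding this value are illustrative assumptions, not the study's actual measures.

        import numpy as np

        def mouth_corner_asymmetry(left_eye, right_eye, left_mouth, right_mouth) -> float:
            """Signed mouth-corner offset perpendicular to the eye line, normalized by
            interocular distance. Near 0 for a symmetric face; a larger magnitude
            suggests unilateral droop."""
            left_eye, right_eye = np.asarray(left_eye, float), np.asarray(right_eye, float)
            left_mouth, right_mouth = np.asarray(left_mouth, float), np.asarray(right_mouth, float)
            axis = right_eye - left_eye
            iod = float(np.linalg.norm(axis))             # interocular distance
            normal = np.array([-axis[1], axis[0]]) / iod  # unit vector perpendicular to the eye line
            # Project each mouth corner onto the perpendicular direction and compare sides.
            d_left = float(np.dot(left_mouth - left_eye, normal))
            d_right = float(np.dot(right_mouth - left_eye, normal))
            return (d_left - d_right) / iod

        # Toy landmarks (x, y) in pixels: the right mouth corner sits slightly higher.
        print(mouth_corner_asymmetry((100, 100), (160, 100), (110, 152), (150, 145)))  # ~0.117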

    Standardization of Artificial Intelligence Development in Radiotherapy

    Application of Artificial Intelligence (AI) tools has recently gained interest in the fields of medical imaging and radiotherapy. Even though many papers have been published in these domains in the last few years, clinical assessment of the proposed AI methods is limited due to the lack of standardized protocols for validating the performance of the developed tools. Moreover, each stakeholder uses their own methods, tools, and evaluation criteria. Communication between stakeholders is limited or absent, which makes it hard to exchange models between clinics. These issues are not limited to radiotherapy but exist in every AI application domain. To address them, methods like the Machine Learning Canvas, Datasheets for Datasets, and Model Cards have been developed. They aim to document the whole creation pipeline of AI solutions and the datasets used to develop the AI, along with their biases, and to facilitate collaboration and communication between stakeholders as well as the clinical introduction of AI. This work introduces the concepts of these three open-source solutions, including the authors' experiences applying them to AI applications for radiotherapy.
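
    As an illustrative (not prescriptive) example of what such documentation captures, the sketch below encodes a handful of Model Card fields as a small Python dataclass. The field names follow the spirit of Mitchell et al.'s Model Cards; the model name and all values are hypothetical.

        from dataclasses import dataclass, field, asdict
        import json

        @dataclass
        class ModelCard:
            """A minimal subset of Model Card fields (Mitchell et al., 2019)."""
            name: str
            intended_use: str
            training_data: str
            evaluation_data: str
            metrics: dict
            limitations: list = field(default_factory=list)

        card = ModelCard(
            name="ct-lung-nodule-detector-v0",  # hypothetical model
            intended_use="Research only; not validated for clinical decision making.",
            training_data="Public chest CT scans from a single-vendor subset (assumed).",
            evaluation_data="Held-out scans from a different institution (assumed).",
            metrics={"sensitivity_at_1_fp_per_scan": 0.85},  # hypothetical number
            limitations=["Unknown performance on pediatric scans",
                         "Possible scanner-vendor bias"],
        )
        print(json.dumps(asdict(card), indent=2))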
